Non-sparse Multiple Kernel Learning
Authors
Abstract
Approaches to multiple kernel learning (MKL) employ ℓ1-norm constraints on the mixing coefficients to promote sparse kernel combinations. When features encode orthogonal characterizations of a problem, sparseness may lead to discarding useful information and may thus result in poor generalization performance. We study non-sparse multiple kernel learning by imposing an ℓ2-norm constraint on the mixing coefficients. Empirically, ℓ2-MKL proves robust against noisy and redundant feature sets and significantly improves the promoter detection rate compared to ℓ1-norm and canonical MKL at large scale.
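For concreteness, below is a minimal sketch of the alternating scheme commonly used for ℓp-norm MKL (p = 2 recovers the ℓ2-MKL studied here): fix the mixing coefficients, solve a standard SVM on the weighted kernel sum, then update the coefficients with the analytic ℓp update. The use of scikit-learn's SVC as the inner solver, and all function and variable names, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, C=1.0, p=2.0, n_iter=50, tol=1e-6):
    """Alternating-optimization sketch of lp-norm MKL (p=2 gives l2-MKL).

    kernels: list of (n, n) precomputed PSD kernel matrices
    y: labels in {-1, +1}
    """
    M = len(kernels)
    theta = np.full(M, M ** (-1.0 / p))  # uniform start with ||theta||_p = 1

    for _ in range(n_iter):
        # Step 1: solve a standard SVM on the weighted kernel sum.
        K = sum(t * Km for t, Km in zip(theta, kernels))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)

        sv = svm.support_            # indices of the support vectors
        ay = svm.dual_coef_.ravel()  # alpha_i * y_i on the support vectors

        # Step 2: per-kernel squared norms,
        # ||w_m||^2 = theta_m^2 * (alpha∘y)' K_m (alpha∘y).
        norms = np.array([
            t ** 2 * ay @ Km[np.ix_(sv, sv)] @ ay
            for t, Km in zip(theta, kernels)
        ])

        # Analytic lp update: theta_m ∝ ||w_m||^(2/(p+1)), ||theta||_p = 1.
        new_theta = norms ** (1.0 / (p + 1))
        new_theta /= np.linalg.norm(new_theta, ord=p)

        if np.linalg.norm(new_theta - theta) < tol:
            theta = new_theta
            break
        theta = new_theta

    return theta, svm
```

Each iteration costs one full SVM solve; large-scale implementations in the ℓp-norm MKL literature instead interleave the weight update with the inner solver's intermediate solutions, which is what makes the large-scale experiments mentioned in the abstract feasible.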
Similar papers
Sparse and Non-sparse Multiple Kernel Learning for Recognition
The development of multiple kernel techniques has attracted particular interest from machine learning researchers working on computer vision topics such as image processing, object classification, and object state recognition. Sparsity-inducing norms, along with non-sparse formulations, promote different degrees of sparsity at the kernel coefficient level, at the same time permitting non-sparse combination w...
Multiple Kernel Learning for Object Classification
Combining information from various image descriptors has become a standard technique for image classification tasks. Multiple kernel learning (MKL) approaches make it possible to determine the optimal combination of such similarity matrices and the optimal classifier simultaneously. Most MKL approaches employ an ℓ1-regularization on the mixing coefficients to promote sparse solutions; an assumption that is...
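For reference, the canonical MKL mixture these abstracts refer to learns a conic combination of base kernels under an ℓ1 constraint (standard formulation; notation mine):

\[
K(x, x') = \sum_{m=1}^{M} \beta_m K_m(x, x'), \qquad \beta_m \ge 0, \quad \sum_{m=1}^{M} \beta_m = 1,
\]

so that \(\|\beta\|_1 = 1\). The simplex constraint is what drives many \(\beta_m\) to exactly zero, i.e. sparse kernel selection.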
Non-Sparse Multiple Kernel Fisher Discriminant Analysis
Sparsity-inducing multiple kernel Fisher discriminant analysis (MK-FDA) has been studied in the literature. Building on recent advances in non-sparse multiple kernel learning (MKL), we propose a non-sparse version of MK-FDA, which imposes a general ℓp-norm regularisation on the kernel weights. We formulate the associated optimisation problem as a semi-infinite program (SIP), and adapt an iterat...
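The SIP reformulation mentioned above follows the generic MKL wrapper pattern; as a sketch, under the assumption that the MK-FDA dual mirrors the SVM case, with S_m denoting the single-kernel dual objective:

\[
\max_{\theta \in \mathbb{R},\ \beta \ge 0,\ \|\beta\|_p \le 1} \theta
\quad \text{s.t.} \quad \sum_{m=1}^{M} \beta_m S_m(\alpha) \ge \theta \ \text{ for all feasible } \alpha .
\]

A column-generation wrapper alternates between finding the most violated constraint (one single-kernel solve for fixed β) and re-optimising (β, θ) over the constraints collected so far.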
ℓp-Norm Multiple Kernel Learning
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this l1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtur...
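The ℓp generalization replaces the simplex constraint with \(\|\beta\|_p \le 1\); for fixed dual variables α the optimal weights then have the closed form reported in the ℓp-norm MKL literature (notation mine):

\[
\beta_m = \frac{\|w_m\|_2^{2/(p+1)}}{\bigl(\sum_{m'=1}^{M} \|w_{m'}\|_2^{2p/(p+1)}\bigr)^{1/p}},
\qquad \|w_m\|_2^2 = \beta_m^2\, (\alpha \circ y)^\top K_m\, (\alpha \circ y),
\]

which is exactly the update used in the code sketch above.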